33 research outputs found

    Analysis of power-saving techniques over a large multi-use cluster with variable workload

    Get PDF
    Reducing the power consumption of any computer system is now an important issue, although this should be done in a manner that is not detrimental to the users of that system. We present a number of policies that can be applied to multi-use clusters where computers are shared between interactive users and high-throughput computing. We evaluate the policies by trace-driven simulation to determine their effect on the power consumed by the high-throughput workload and their impact on high-throughput users. We further evaluate the policies under heavier load by synthetically generating workloads based on the workload profile observed on our system. We demonstrate that these policies could save 55% of the energy currently used by our high-throughput jobs, compared with our current cluster policies, without affecting the high-throughput users' experience.
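
    As a purely illustrative aside, the sketch below is not the authors' simulator: it is a hypothetical, minimal trace-driven simulation in Python of one plausible policy (power a node down after a fixed idle timeout), using made-up jobs and power figures, intended only to show the general shape of such an evaluation. It assumes jobs on a given node do not overlap.

        from dataclasses import dataclass

        # Hypothetical per-node power figures (watts); real values would be measured.
        POWER_BUSY, POWER_IDLE, POWER_SLEEP = 120.0, 60.0, 5.0

        @dataclass
        class Job:
            node: str
            start: float      # seconds since the start of the trace
            duration: float   # seconds

        def energy(trace, horizon, idle_timeout=None):
            """Total energy in joules; idle_timeout=None models an always-on cluster."""
            def idle_energy(gap):
                # Under the policy a node idles for idle_timeout seconds, then sleeps.
                if idle_timeout is not None and gap > idle_timeout:
                    return POWER_IDLE * idle_timeout + POWER_SLEEP * (gap - idle_timeout)
                return POWER_IDLE * gap

            total = 0.0
            for node in {job.node for job in trace}:
                t = 0.0
                for start, end in sorted((j.start, j.start + j.duration)
                                         for j in trace if j.node == node):
                    total += idle_energy(start - t) + POWER_BUSY * (end - start)
                    t = end
                total += idle_energy(horizon - t)
            return total

        # A made-up three-job trace on two nodes over a 6000-second window.
        trace = [Job("n1", 0, 600), Job("n1", 4000, 600), Job("n2", 100, 300)]
        always_on = energy(trace, horizon=6000)
        with_policy = energy(trace, horizon=6000, idle_timeout=300)
        print(f"saving under the idle-timeout policy: {100 * (1 - with_policy / always_on):.1f}%")

    A real evaluation, as in the paper, would replay the profiled cluster trace and also measure the impact on users, for example the extra wake-up latency seen by newly submitted jobs.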

    Model-independent Representation of Electroweak Data

    Get PDF
    General model-independent expressions are developed for the polarized and unpolarized cross-sections for $e^+e^- \to f\bar{f}$ near the $Z^0$ resonance. The expressions assume only the analyticity of S-matrix elements. Angular dependence is included by means of a partial-wave expansion. The resulting simple forms are suitable for use in fitting data or in Monte Carlo event generators. A distinction is made between model-independent and model-dependent QED corrections, and a simple closed expression is given for the effect of initial-final state bremsstrahlung and virtual QED corrections. Comment: 14 pages, LaTeX. Uses amssymb.sty. Minor changes to text.
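
    For context, the display below sketches the kind of model-independent, S-matrix-inspired form commonly used for the total cross-section near the $Z^0$ resonance; it is quoted in generic conventions and is not necessarily the exact parametrization derived in this paper:

        \sigma^{f}_{\rm tot}(s) \;=\; \frac{4}{3}\pi\alpha^2
          \left[ \frac{r^{f}\, s + j^{f}\,(s - \bar{M}_Z^2)}
                      {(s - \bar{M}_Z^2)^2 + \bar{M}_Z^2\,\bar{\Gamma}_Z^2}
               + \frac{r^{f}_{\gamma}}{s} \right]

    where $\bar{M}_Z$ and $\bar{\Gamma}_Z$ are the (fixed-width) S-matrix mass and width of the $Z^0$, $r^{f}$ and $j^{f}$ parametrize the resonant and $\gamma$-$Z$ interference contributions for fermion species $f$, $r^{f}_{\gamma}$ the pure photon-exchange term, and QED radiative corrections are applied on top of this Born-like form.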

    Orchestrating privacy-protected big data analyses of data from different resources with R and DataSHIELD.

    Get PDF
    Combined analysis of multiple, large datasets is a common objective in the health and biosciences. Existing methods tend to require researchers either to physically bring data together in one place or to follow an agreed analysis plan and share results. Developed over the last 10 years, the DataSHIELD platform is a collection of R packages that reduce the challenges of these methods, which include the ethico-legal constraints that limit researchers' ability to physically bring data together and the analytical inflexibility associated with conventional approaches to sharing results. The key feature of DataSHIELD is that data from research studies stay on a server at each of the institutions responsible for the data, and each institution retains control over who can access its data. The platform allows an analyst to pass commands to each server, and the analyst receives results that do not disclose the individual-level data of any study participant. DataSHIELD uses Opal, a data integration system used by epidemiological studies and developed by the OBiBa open-source bioinformatics project. However, until now the analysis of big data with DataSHIELD has been limited by the storage formats available in Opal and the analysis capabilities available in the DataSHIELD R packages. We present a new architecture ("resources") for DataSHIELD and Opal that allows large, complex datasets to be used at their original location, in their original format and with external computing facilities. We provide real big data analysis examples from genomics and geospatial projects. For genomic data analyses, we also illustrate how to extend the resources concept to address specific big data infrastructures such as GA4GH or EGA, and to make use of shell commands. Our new infrastructure will help researchers to perform data analyses in a privacy-protected way on existing data sharing initiatives or projects. To help researchers use this framework, we describe selected packages and present an online book (https://isglobal-brge.github.io/resource_bookdown).
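
    DataSHIELD itself is driven from R; the Python sketch below is only a hypothetical, much-simplified illustration of the federated, non-disclosive pattern the abstract describes (each server holds its own data, applies a minimum cell-count disclosure rule and returns only aggregates that the analyst combines). It does not use the actual DataSHIELD or Opal APIs, and the class and function names are invented.

        # Toy model of federated, non-disclosive aggregation; not the DataSHIELD/Opal API.
        MIN_COUNT = 5   # disclosure-control rule: refuse summaries of very small groups

        class StudyServer:
            """Holds individual-level data locally; returns only aggregate summaries."""
            def __init__(self, name, values):
                self.name = name
                self._values = values          # never leaves the server

            def summary(self):
                if len(self._values) < MIN_COUNT:
                    raise PermissionError(f"{self.name}: group too small to disclose")
                return {"n": len(self._values), "sum": sum(self._values)}

        def federated_mean(servers):
            """Analyst-side code: combine per-study aggregates into a pooled mean."""
            parts = [s.summary() for s in servers]
            return sum(p["sum"] for p in parts) / sum(p["n"] for p in parts)

        servers = [
            StudyServer("cohort_A", [5.1, 4.8, 6.0, 5.5, 5.9, 4.7]),
            StudyServer("cohort_B", [6.2, 5.8, 6.1, 5.9, 6.4]),
        ]
        print(f"pooled mean without moving individual-level data: {federated_mean(servers):.2f}")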

    TRE-FX: Delivering a federated network of trusted research environments to enable safe data analytics

    Get PDF
    Trusted Research Environments (TREs) are secure locations in which data are placed for researchers to analyse. TREs host administrative data, hospital data or any other data that needs to remain securely isolated, but it is hard for a researcher to perform an analysis across multiple TREs, requesting and gathering the outputs from each one. This is a common problem in the UK's devolved healthcare system, with its geographical and governance boundaries. There are different ways of implementing TREs and the analysis tools that use them. A solution must be straightforward for existing, independent systems to adopt, must cope with the variety of system implementations, and must work within the "Five Safes" framework that enables data services to provide safe research access to data. TRE-FX assembled leading infrastructure researchers, analysis tool makers, TRE providers and public engagement specialists to streamline the exchange of data requests and results. The "Five Safes RO-Crate" standard packages up ("crates") the research objects needed for requests and results, together with the information the tools and TRE providers need to ensure that the crates are reviewed and processed according to Five Safes principles. TRE-FX showed how this works using software components and an end-to-end demonstrator implemented by a TRE in Wales; two other TREs, in Scotland and England, are preparing to follow suit. Two analysis tool providers (Bitfount and DataSHIELD) modified their systems to use the RO-Crates. The next step is practical implementation as part of the HDR UK programme, and two large European projects will develop the approach further. TRE-FX shows that it is possible to streamline how analysis tools access multiple TREs while enabling the TREs to ensure that the access is safe. The approach scales as more TREs are added and can be adopted by established systems. Researchers will then be able to perform an analysis across multiple TREs much more easily, widening the scope of their research and making more effective use of the UK's data. Had this been available for COVID-19 data analysis, it would have super-charged researchers' ability to answer pressing questions quickly across the UK. This work was funded by UK Research & Innovation [Grant Number MC_PC_23007] as part of Phase 1 of the DARE UK (Data and Analytics Research Environments UK) programme, delivered in partnership with Health Data Research UK (HDR UK) and Administrative Data Research UK (ADR UK).
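
    RO-Crate packages research objects together with JSON-LD metadata describing them. The Python snippet below is a purely illustrative sketch of that general idea for a federated analysis request; apart from the core RO-Crate metadata descriptor, the entities and properties shown are hypothetical and do not reproduce the actual Five Safes RO-Crate profile defined by TRE-FX.

        import json

        # Hypothetical, minimal RO-Crate-style metadata for a federated analysis request.
        crate = {
            "@context": "https://w3id.org/ro/crate/1.1/context",
            "@graph": [
                {"@id": "ro-crate-metadata.json",
                 "@type": "CreativeWork",
                 "about": {"@id": "./"}},
                {"@id": "./",
                 "@type": "Dataset",
                 "name": "Example federated analysis request",
                 "hasPart": [{"@id": "query.R"}]},
                {"@id": "query.R",
                 "@type": "File",
                 "description": "Analysis code to be reviewed and run inside each TRE"},
            ],
        }

        with open("ro-crate-metadata.json", "w") as fh:
            json.dump(crate, fh, indent=2)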

    Novel Approach to Confront Electroweak Data and Theory

    Get PDF
    A novel approach to studying electroweak physics at the one-loop level in generic $SU(2)_L \times U(1)_Y$ theories is introduced. It separates the one-loop corrections into two pieces: process-specific ones from vertex and box contributions, and universal ones from contributions to the gauge-boson propagators. The latter are parametrized in terms of four effective form factors $\bar{e}^2(q^2)$, $\bar{s}^2(q^2)$, $\bar{g}_Z^2(q^2)$ and $\bar{g}_W^2(q^2)$ corresponding to the $\gamma\gamma$, $\gamma Z$, $ZZ$ and $WW$ propagators. Under the assumption that only the Standard Model contributes to the process-specific corrections, the magnitudes of the four form factors are determined at $q^2=0$ and at $q^2=M_Z^2$ by fitting to all available precision experiments. These values are then compared systematically with predictions of $SU(2)_L \times U(1)_Y$ theories. In all fits $\alpha_s(M_Z)$ and $\bar{\alpha}(M_Z^2)$ are treated as external parameters in order to keep the interpretation as flexible as possible. The treatment of the electroweak data is presented in detail together with the relevant theoretical formulae used to interpret the data. No deviation from the Standard Model has been identified. Ranges of the top quark and Higgs boson masses are derived as functions of $\alpha_s(M_Z)$ and $\bar{\alpha}(M_Z^2)$. Also discussed are consequences of the recent precision measurement of the left-right asymmetry at SLC, as well as the impact of a top quark mass measurement and an improved $W$ mass measurement. Comment: 123 pages, LaTeX (33 figures available via anonymous ftp), KEK-TH-375, KEK preprint 93-159, KANAZAWA-94-19, DESY 94-002, YUMS 94-22, SNUTP 94-82, to be published in Z. Phys.
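
    As a rough guide to the notation (stated here in generic conventions rather than quoted from the paper), the four effective charges reduce at tree level to the familiar gauge couplings,

        \bar{e}^2(q^2) \to e^2 = 4\pi\alpha, \qquad
        \bar{s}^2(q^2) \to \sin^2\theta_W, \qquad
        \bar{g}_W^2(q^2) \to \frac{e^2}{\sin^2\theta_W}, \qquad
        \bar{g}_Z^2(q^2) \to \frac{e^2}{\sin^2\theta_W \cos^2\theta_W},

    while at one loop the four functions acquire independent $q^2$-dependent propagator corrections, which is what makes them useful as model-independent fit parameters.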

    The impact of increased flooding occurrence on the mobility of potentially toxic elements in floodplain soil – a review

    Get PDF
    The frequency and duration of flooding events are increasing because land-use changes increase the run-off of precipitation and climate change causes more intense rainfall events. Floodplain soils situated downstream of urban or industrial catchments, traditionally considered a sink for potentially toxic elements (PTEs) arriving from the river reach, may now become a source of legacy pollution to the surrounding environment if PTEs are mobilised by unprecedented flooding events. When a soil floods, the mobility of PTEs can increase or decrease due to the net effect of five key processes: (i) the soil redox potential decreases, which can directly alter the speciation, and hence mobility, of redox-sensitive PTEs (e.g. Cr, As); (ii) pH increases, which usually decreases the mobility of metal cations (e.g. Cd2+, Cu2+, Ni2+, Pb2+, Zn2+); (iii) dissolved organic matter (DOM) increases, which chelates and mobilises PTEs; (iv) Fe and Mn hydroxides undergo reductive dissolution, releasing adsorbed and co-precipitated PTEs; and (v) sulphate is reduced and PTEs are immobilised through precipitation of metal sulphides. These processes can act independently, but they also interact with one another to affect the mobility of PTEs, meaning the effect of flooding on PTE mobility is not easy to predict. Many of the processes involved in mobilising PTEs are microbially mediated and temperature dependent, and their kinetics are poorly understood. Soil mineralogy and texture vary spatially and will affect how the mobility of PTEs in a specific soil is impacted by flooding; as a result, knowledge based on one river catchment may not be particularly useful for predicting the impacts of flooding at another site. This review provides a critical discussion of the mechanisms controlling the mobility of PTEs in floodplain soils. It summarises current understanding, identifies limitations of existing knowledge, and highlights requirements for further research.

    Constructing reliable distributed applications using actions and objects

    No full text
    PhD Thesis. A computation model for distributed systems which has found widespread acceptance is that of atomic actions (atomic transactions) controlling operations on persistent objects. Much current research work is oriented towards the design and implementation of distributed systems supporting such an object-and-action model. However, little work has been done to investigate the suitability of such a model for building reliable distributed systems. Atomic actions have many properties which are desirable when constructing reliable distributed applications, but these same properties can also prove to be obstructive. This thesis examines the suitability of atomic actions for building reliable distributed applications. Several new structuring techniques are proposed, providing more flexibility than hitherto possible for building a large class of applications. The proposed new structuring techniques are: Serialising Actions, Top-Level Independent Actions, N-Level Independent Actions, Common Actions and Glued Actions. A new generic form of action, the Coloured Action, is also proposed, which provides more control over concurrency and recovery than traditional actions. It will be shown that Coloured Actions provide a uniform mechanism for implementing most of the new structuring techniques, and at the same time are no harder to implement than normal actions; this proposal is therefore of practical importance. The suitability of the new structuring techniques will be demonstrated by considering a number of applications, and it will be shown that the proposed techniques provide natural tools for composing distributed applications. Funded by the Science and Engineering Research Council.
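
    The action-and-object model the thesis builds on can be sketched in a few lines. The Python below is a hypothetical, simplified illustration of flat and nested atomic actions over recoverable objects (begin/commit/abort with state restored on abort); it is not the thesis's notation or any particular system's API, and it ignores persistence, distribution and concurrency control.

        import copy

        class RecoverableObject:
            """An object whose state can be checkpointed and restored by actions."""
            def __init__(self, state):
                self.state = state

        class AtomicAction:
            """Begin/commit/abort over a set of recoverable objects; nestable."""
            def __init__(self, *objects, parent=None):
                self.objects = objects
                self.parent = parent     # enclosing action, if this one is nested
                self._saved = None

            def __enter__(self):
                # Begin: checkpoint the state of every object touched by this action.
                self._saved = [copy.deepcopy(o.state) for o in self.objects]
                return self

            def __exit__(self, exc_type, exc, tb):
                if exc_type is not None:
                    # Abort: roll every object back to its checkpointed state.
                    for o, saved in zip(self.objects, self._saved):
                        o.state = saved
                # A full implementation would make a nested action's commit provisional
                # until the top-level action commits; this sketch omits that.
                return exc_type is not None   # swallow the exception after rollback

        account = RecoverableObject({"balance": 100})
        with AtomicAction(account) as top:
            account.state["balance"] -= 30
            with AtomicAction(account, parent=top):
                account.state["balance"] -= 1000
                raise RuntimeError("insufficient funds")   # the nested action aborts
        print(account.state)   # {'balance': 70}: the outer action's update survives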

    Exercising Application Specific Run-time Control Over Clustering of Objects

    No full text
    … with persistent objects. A persistent object not in use is normally held in a passive state, with its state residing in a stable disc-based object store; it is activated on demand (i.e., when an invocation is made) by loading its state and methods from the object store into volatile store and associating a server process with it for receiving RPC invocations. Further, an object provides a convenient unit for concurrency control, storage, replication and migration. Argus [1], Arjuna [2,3] and Guide [4] are just some of the many systems designed broadly according to the model outlined above. Support for dynamic reconfiguration, permitting changes to the structure of an application while it is in operation, is becoming increasingly important for distributed applications. One use of such a reconfiguration facility would be to dynamically change the structure of an application in order to improve its performance. In this paper we describe the design and implementation of a dynamic performance improvement…
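
    The passive/active object life cycle described here can be illustrated with a short, hypothetical Python sketch in which an object's state lives in a disc-based store and is loaded on first invocation. It is not the API of Argus, Arjuna or Guide, and it omits server processes, RPC and clustering decisions; all names are invented.

        import json, os

        STORE_DIR = "object_store"       # stand-in for the stable, disc-based object store

        class PersistentObject:
            """Kept passive on disc; activated on demand when an invocation arrives."""
            def __init__(self, uid, default=None):
                self.uid = uid
                self._state = None       # None => passive (not in volatile store)
                self._default = default or {}

            def _activate(self):
                if self._state is None:  # load state from the object store on demand
                    path = os.path.join(STORE_DIR, f"{self.uid}.json")
                    if os.path.exists(path):
                        with open(path) as fh:
                            self._state = json.load(fh)
                    else:
                        self._state = dict(self._default)

            def invoke(self, method, *args):
                self._activate()         # activation happens at invocation time
                return getattr(self, method)(*args)

            def deposit(self, amount):   # an example operation on the object's state
                self._state["balance"] = self._state.get("balance", 0) + amount
                return self._state["balance"]

            def passivate(self):         # write state back and return to the passive state
                os.makedirs(STORE_DIR, exist_ok=True)
                with open(os.path.join(STORE_DIR, f"{self.uid}.json"), "w") as fh:
                    json.dump(self._state, fh)
                self._state = None

        acct = PersistentObject("account-42", default={"balance": 0})
        print(acct.invoke("deposit", 25))    # activates the object, then runs the operation
        acct.passivate()                     # state persists across program runs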

    Implementing Fault-Tolerant Distributed Applications Using Objects and Multi-Coloured Actions

    Get PDF
    This paper develops some control structures suitable for composing fault-tolerant distributed applications using atomic actions (atomic transactions) as building blocks, and then goes on to describe how such structures may be implemented using the concept of multi-coloured actions. We first identify the reasons why other control structures, in addition to the by now well-known nested and concurrent atomic actions, are desirable, and then propose three new structures: serialising actions, glued actions and top-level independent actions. A number of examples are used to illustrate their usefulness. A novel technique, based on the concept of multi-coloured actions, is then presented as a uniform basis for implementing all three of the action structures presented here.

    Distributed Object Middleware to Support Dependable Information Sharing between Organisations

    Get PDF
    … own services and to utilise the services of others. This naturally leads to information sharing across organisational boundaries. However, despite the requirement to share information, the autonomy and privacy requirements of organisations must not be compromised, which demands the strict policing of inter-organisational interactions. There is therefore a requirement for dependable mechanisms for information sharing between organisations that do not necessarily trust each other. The paper describes the design of a novel distributed object middleware that guarantees both safety and liveness in this context. The safety property ensures that local policies are not compromised despite failures and/or misbehaviour by other parties. The liveness property ensures that, if no party misbehaves, agreed interactions will take place despite a bounded number of temporary network and computer related failures. The paper describes a prototype implementation with example applications.
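
    The safety property described above (local policy is enforced regardless of how other parties behave) can be illustrated with a small, hypothetical sketch. The Python below is not the paper's middleware; it simply shows each organisation validating every incoming interaction against its own local policy before executing it, and refusing anything else. All names are invented.

        # Toy model of locally enforced interaction policy, not the paper's middleware.
        class Organisation:
            def __init__(self, name, policy):
                self.name = name
                self.policy = policy        # local policy: partner -> allowed operations
                self.log = []

            def receive(self, sender, operation, payload):
                # Safety: the local policy is checked before anything is executed, so a
                # misbehaving partner cannot cause an unauthorised operation to run.
                allowed = self.policy.get(sender, set())
                if operation not in allowed:
                    return {"status": "rejected", "reason": f"{operation} not permitted for {sender}"}
                self.log.append((sender, operation, payload))
                return {"status": "accepted"}

        hospital = Organisation("hospital", policy={"research_lab": {"read_summary"}})
        print(hospital.receive("research_lab", "read_summary", {"study": "S1"}))
        print(hospital.receive("research_lab", "read_records", {"patient": "P9"}))  # refused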